REQUIRED: indexing into a set's body, not into its contained items ( as in IntrinsicEquals); indexing into an arbitrary path's last contained item; a return value from indexing when it fails.

IntrinsicEquals

So the data for this is the type of intrinsic; no index is accepted, as intrinsics are unique to each piece. A piece can be unversioned data or versioned data.

from enum import IntFlag, auto

class Intrinsics( IntFlag):
	DATA_ID= auto()
	TAGS= auto()
	STORE_NAME= auto()
	STORE_TYPE= auto()

	VERSIONED_STATUS= auto()

	VERSION_ID= auto()
	VERSION_COMMITING_AUTHOR_NAME= auto()
	VERSION_COMMITING_COMPUTER_NAME= auto()
	VERSION_COMMIT_TIME= auto()
	VERSION_TYPE= auto()

intrinsicTypes= {
	Intrinsics.DATA_ID                         : str,
	Intrinsics.TAGS                            : set[ Tag],
	Intrinsics.STORE_NAME                      : str,
	Intrinsics.STORE_TYPE                      : type,

	Intrinsics.VERSIONED_STATUS                : VersionedStatus,

	Intrinsics.VERSION_ID                      : str| None,
	Intrinsics.VERSION_COMMITING_AUTHOR_NAME   : str| None,
	Intrinsics.VERSION_COMMITING_COMPUTER_NAME : str| None,
	Intrinsics.VERSION_COMMIT_TIME             : float| None,
	Intrinsics.VERSION_TYPE                    : CommitType| None,
}

So the data can be one member of Intrinsics followed by data of the corresponding type.

The value should be converted to the representation that MongoDB uses.

intrinsicQueries= {
	Intrinsics.DATA_ID                         : { "_id": value},
	Intrinsics.TAGS                            : { "Tags # Construct index into set": { "$all": list( value)}},
	Intrinsics.STORE_NAME                      : None, # If not the current store name then append a filter matching nothing, otherwise one matching everything
	Intrinsics.STORE_TYPE                      : None, # If not mongo then append a filter matching nothing, otherwise one matching everything

	Intrinsics.VERSIONED_STATUS                : { "Versions": { "$exists": "bool dependent on value"}},

	# For the below an index must be constructed into the version info
	Intrinsics.VERSION_ID                      : ( { "Versions": { "$elemMatch": { "0.0": value}}}, { "Versions.$": 1, "Tags": 1}),
	Intrinsics.VERSION_COMMITING_AUTHOR_NAME   : ( { "Versions": { "$elemMatch": { "0.1": value}}}, { "Versions.$": 1, "Tags": 1}),
	Intrinsics.VERSION_COMMITING_COMPUTER_NAME : ( { "Versions": { "$elemMatch": { "0.2": value}}}, { "Versions.$": 1, "Tags": 1}),
	Intrinsics.VERSION_COMMIT_TIME             : ( { "Versions": { "$elemMatch": { "0.3": processed}}}, { "Versions.$": 1, "Tags": 1}),
	Intrinsics.VERSION_TYPE                    : ( { "Versions": { "$elemMatch": { "0.4": value}}}, { "Versions.$": 1, "Tags": 1}),
}

An issue here is that a find query returns matching documents, not subsections of documents. How does one filter to specific versions? I was going to project to "_id", but how would one do this for versions? $elemMatch can be used in the find query to match the version, and then $ in the projection returns it; I think this means that versioned pieces must be filtered for separately from unversioned pieces. It seems it isn't possible to return specific data here, as $ in a projection does not accept indices after it. A view could be created for retrieving version piece data, which would make it possible to not need to retrieve the whole piece in this scenario.
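One way the view idea could go: an aggregation pipeline whose $project stage uses $filter to keep only the matching version entries, so the whole piece never needs to be retrieved. This is only a sketch built as plain dicts; the "Versions"/"Tags" field names and the nesting of version info at element 0 follow the notes above, while matchIndex and the function name are assumptions:

```python
def buildVersionFilterPipeline( matchIndex, value):
	"""Aggregation pipeline keeping only version entries whose
	version-info field at matchIndex equals value
	( e.g. matchIndex 0 for VERSION_ID, 3 for VERSION_COMMIT_TIME)."""
	return [
		{ "$match": { "Versions": { "$elemMatch": { f"0.{ matchIndex}": value}}}},
		{ "$project": {
			"Tags": 1,
			# Unlike the positional $ projection, $filter can return
			# every matching element, with arbitrary match logic
			"Versions": { "$filter": {
				"input": "$Versions",
				"as": "version",
				"cond": { "$eq": [
					{ "$arrayElemAt": [ { "$arrayElemAt": [ "$$version", 0]}, matchIndex]},
					value,
				]},
			}},
		}},
	]
```

Such a pipeline could back a MongoDB view, so version piece data can be fetched through the view rather than by pulling whole documents.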

HasTag

The data is a single string.

{ "Tags # Construct index into set": { "$elemMatch": { "$eq": data}}},

DataAtPathEquals

So the data input is ( path, value). The value can be run through a conversion to Mongo's format after the path is converted to Mongo's format; maybe it is marked as being in the Python-in-Mongo-representation context and then converted to Mongo, after which it can be sent to a query operator:

{ convertedPath: value}
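A minimal sketch of that conversion, assuming a path is a sequence of dict keys and list indices (both helper names are made up):

```python
def convertPathToMongo( path):
	"""Join a Python-side path of keys/ indices into Mongo dot notation."""
	return ".".join( str( part) for part in path)

def dataAtPathEqualsQuery( path, value):
	"""value is assumed to already be in Mongo's representation."""
	return { convertPathToMongo( path): value}
```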

DataAtPathOfType

The input is ( path, pythonType). If it is a custom object, then the module and type will be stored in a dict at the index specified by path, so we can convert the pythonType into a module and qualname.

If it is a BSON-compliant type, it will be stored in the BSON representation. This type can be matched using the $type query operator, which takes a BSON type ( number or alias); $type can also take multiple aliases to check against. The type can be checked for compliance and the alias found by using a dict:

import re
from datetime import datetime

import bson

bsonComplianceAndMongoAlias= {
	None: "null",
	bool: "bool",
	int: [ "int", "long"],
	bson.Int64: "long",
	float: "double",
	str: "string",
	list: "array",
	dict: "object",
	datetime: "date",
	bson.Regex: "regex",
	re.Pattern: "regex",
	bson.Binary: "binData",
	bson.ObjectId: "objectId",
	bson.DBRef: "dbPointer",
	bson.Code: "javascript",
	bytes: "binData",
}

This could possibly be stored in, or partly calculated from, the BSON conversion set.

With dict, $type would also match custom objects, which isn't desired; dict should instead check for an object lacking the type and module strings.
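Putting the alias lookup and the dict caveat together might look like the following sketch. The compliance table is trimmed; the helper name and the "Type"/"Module" key names for custom objects are placeholders for whatever the storage layer actually writes:

```python
# Trimmed stand-in for bsonComplianceAndMongoAlias
bsonAliases= {
	bool: "bool",
	int: [ "int", "long"],
	float: "double",
	str: "string",
	list: "array",
	dict: "object",
}

def dataAtPathOfTypeQuery( convertedPath, pythonType):
	alias= bsonAliases.get( pythonType)
	if alias is None:
		# Custom object: match the stored module/ qualname strings instead
		return { f"{ convertedPath}.Module": pythonType.__module__,
			f"{ convertedPath}.Type": pythonType.__qualname__}
	query= { convertedPath: { "$type": alias}}
	if pythonType is dict:
		# Exclude custom objects by requiring the marker string to be absent
		query[ f"{ convertedPath}.Type"]= { "$exists": False}
	return query
```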

Combination

So the combinations are:

ALL_PASS: AND
ANY_PASS: OR
ALL_FAIL: NOR
ANY_FAIL: NAND
ALL_PASS_OR_ALL_FAIL: XNOR
DIFFERENCE_EXISTS: XOR
ALL_PASS: { "$and": [ components]}
ANY_PASS: { "$or": [ components]}
ANY_FAIL: { "$nor": [ { "$and": [ components]}]} # $not is field-level only in Mongo, so negate with a single-element $nor
ALL_FAIL: { "$nor": [ components]}

ALL_PASS_OR_ALL_FAIL: { "$or": [ { "$and": [ components]}, { "$nor": [ components]}]}
DIFFERENCE_EXISTS: { "$nor": [ { "$or": [ { "$and": [ components]}, { "$nor": [ components]}]}]}
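A sketch of the combinator over these mappings (the Combination enum is assumed; note that MongoDB only allows $not at field level, so top-level negation is written as a single-element $nor):

```python
from enum import Enum, auto

class Combination( Enum):
	ALL_PASS= auto()
	ANY_PASS= auto()
	ALL_FAIL= auto()
	ANY_FAIL= auto()
	ALL_PASS_OR_ALL_FAIL= auto()
	DIFFERENCE_EXISTS= auto()

def combineQueries( combination, components):
	"""Combine component filter dicts into one Mongo filter."""
	allPass= { "$and": list( components)}
	allFail= { "$nor": list( components)}
	if combination is Combination.ALL_PASS:
		return allPass
	if combination is Combination.ANY_PASS:
		return { "$or": list( components)}
	if combination is Combination.ALL_FAIL:
		return allFail
	if combination is Combination.ANY_FAIL:
		# NAND: negate ALL_PASS via single-element $nor
		return { "$nor": [ allPass]}
	if combination is Combination.ALL_PASS_OR_ALL_FAIL:
		return { "$or": [ allPass, allFail]}
	if combination is Combination.DIFFERENCE_EXISTS:
		# XOR: negate XNOR via single-element $nor
		return { "$nor": [ { "$or": [ allPass, allFail]}]}
	raise ValueError( combination)
```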

The more complex combinations will evaluate the component filters twice unless MongoDB optimises this.

For situations where we need a list of queries for the children, we can recursively call the query-dict generation function; maybe then we can specify not to combine the results. Maybe we could also get a useful return value indicating whether there was a full conversion.
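A sketch of that recursive generation with a completeness flag. The node model here is invented for illustration: a node is either a ready Mongo dict, None for an unconvertible filter, or a ( combinationKey, children) pair; unconvertible children are dropped, so the resulting query may over-match and needs client-side re-filtering:

```python
def generateQuery( node):
	"""Return ( query, fullyConverted)."""
	if node is None:
		# Unconvertible filter: match everything, flag the incompleteness
		return { }, False
	if isinstance( node, dict):
		return node, True
	key, children= node
	results= [ generateQuery( child) for child in children]
	# Keep only fully-converted children; the query widens, never narrows
	queries= [ query for query, converted in results if converted]
	return { key: queries}, all( converted for _, converted in results)
```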


There seems to be an unresolved issue in versioning that needs exploration

Combining different filters' results together

Each filter can be evaluated individually, but we can group them together.

The function creating these should only ever be passed rearrangeable filters.